
    On the (un)importance of working memory in speech-in-noise processing for listeners with normal hearing thresholds

    With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory (WM). Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variations in WMC are an important predictor of variations in speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variations in WMC are estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variations in WMC are predictive of SiN identification independently of the age and hearing status of the listener.

    Spatial release of masking in children and adults in non-individualized virtual environments

    The spatial release of masking (SRM) is often measured in virtual auditory environments created from head-related transfer functions (HRTFs) of a standardized adult head. Adults and children, however, differ in head dimensions, and mismatched HRTFs are known to affect some aspects of binaural hearing. So far, there has been little research on HRTFs in children, and it is unclear whether a large mismatch of spatial cues can degrade speech perception in complex environments. In two studies, the effect of non-individualized virtual environments on the accuracy of SRM measurements in adults and children was examined. SRMs were measured in virtual environments created from individual and non-individualized HRTFs and in the equivalent real anechoic environment. Speech reception thresholds (SRTs) were measured for frontal target sentences and symmetrical speech maskers at 0° or ±90° azimuth. No significant difference between environments was observed for adults. In 7- to 12-year-old children, SRTs and SRMs improved with age, with SRMs approaching adult levels. SRTs differed slightly between environments and were significantly worse in a virtual environment based on HRTFs from a spherical head. Adult HRTFs seem sufficient to accurately measure SRTs in children, even in complex listening conditions.

    Sinusoidal plucks and bows are not categorically perceived, either


    Contributions of temporal encodings of voicing, voicelessness, fundamental frequency, and amplitude variation to audiovisual and auditory speech perception

    Auditory and audio-visual speech perception were investigated using auditory signals of invariant spectral envelope that temporally encoded the presence of voiced and voiceless excitation, variations in amplitude envelope, and F0. In experiment 1, the contribution of the timing of voicing to consonant identification was compared with the additional effects of variations in F0 and in the amplitude of voiced speech. In audio-visual conditions only, amplitude variation slightly increased accuracy overall and for manner features. F0 variation slightly increased overall accuracy and manner perception in both auditory and audio-visual conditions. Experiment 2 examined consonant information derived from the presence and amplitude variation of voiceless speech, in addition to that from voicing, F0, and voiced speech amplitude. Binary indication of voiceless excitation improved accuracy overall and for voicing and manner. The amplitude variation of voiceless speech produced only a small increment in place-of-articulation scores. A final experiment examined audio-visual sentence perception using encodings of voiceless excitation and amplitude variation added to a signal representing voicing and F0. Amplitude variation contributed to sentence perception, but voiceless excitation did not. The timing of voiced and voiceless excitation appears to provide the major temporal cues to consonant identity. (C) 1999 Acoustical Society of America. [S0001-4966(99)01410-1]

    Native-language benefit for understanding speech-in-noise: The contribution of semantics

    Bilinguals are better able to perceive speech-in-noise in their native than in their non-native language. This benefit is thought to be due to greater use of higher-level, linguistic context in the native language. Previous studies showing this have used sentences and therefore do not allow us to determine which level of language contributes to this context benefit. Here, we used a new paradigm that isolates the semantic level of speech in both languages of bilinguals. Results revealed that in the native language, a semantically related target word facilitates the perception of a previously presented degraded prime word, relative to when a semantically unrelated target follows the prime, suggesting a specific contribution of semantics to the native-language context benefit. We also found the reverse in the non-native language, where semantic context produced a disadvantage for word recognition, suggesting that such top-down contextual information results in semantic interference in one's second language.

    Language Development and Impairment in Children With Mild to Moderate Sensorineural Hearing Loss.

    PURPOSE: The goal of this study was to examine language development and factors related to language impairments in children with mild to moderate sensorineural hearing loss (MMHL). METHOD: Ninety children aged 8-16 years (46 children with MMHL; 44 age-matched controls) were administered a battery of standardized language assessments, including measures of phonological processing, receptive and expressive vocabulary and grammar, word and nonword reading, and a parental report of communication skills. Group differences were examined after controlling for nonverbal ability. RESULTS: Children with MMHL performed as well as controls on receptive vocabulary and on word and nonword reading. They also performed within normal limits, albeit significantly worse than controls, on expressive vocabulary and on receptive and expressive grammar, and worse than both controls and standardized norms on phonological processing and parental report of communication skills. However, there was considerable variation in performance, with 26% showing evidence of clinically significant oral or written language impairments. Poor performance was linked neither to severity of hearing loss nor to age at diagnosis. Rather, outcomes were related to nonverbal ability, maternal education, and the presence or absence of a family history of language problems. CONCLUSIONS: Clinically significant language impairments are not an inevitable consequence of MMHL. Risk factors appear to include lower maternal education and a family history of language problems, whereas nonverbal ability may constitute a protective factor.